Independence, Expectations and the Moment Generating Function

Independent Random Variables

Recall that two events, $A$ and $B$, are independent if

$$P[A \cap B] = P[A]\,P[B]$$

Since the conditional probability of $A$ given $B$ is defined by

$$P[A \mid B] = \frac{P[A \cap B]}{P[B]}$$

we see that $A$ and $B$ are independent if and only if

$$P[A \mid B] = P[A] \quad (\text{when } P[B] > 0)$$
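As a concrete illustration (an example added here, not part of the original notes), the sketch below checks this definition for two events defined on a single roll of a fair die, computing both sides exactly with rational arithmetic:

```python
from fractions import Fraction

# Sample space: one roll of a fair six-sided die.
outcomes = range(1, 7)
A = {x for x in outcomes if x % 2 == 0}   # A = "the roll is even"
B = {x for x in outcomes if x <= 4}       # B = "the roll is at most 4"

def prob(event):
    """Probability of an event under the uniform distribution on the die."""
    return Fraction(len(event), 6)

# Both sides equal 1/3, so A and B are independent events.
print(prob(A & B))            # P[A ∩ B] = 1/3
print(prob(A) * prob(B))      # P[A] P[B] = 1/2 * 2/3 = 1/3
```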
Two continuous random variables, $X$ and $Y$, are similarly independent if

$$P[X \in A, Y \in B] = P[X \in A]\,P[Y \in B]$$
Details

Two continuous random variables, $X$ and $Y$, are similarly independent if

$$P[X \in A, Y \in B] = P[X \in A]\,P[Y \in B]$$

Now suppose $X$ has p.d.f. $f_X$ and $Y$ has p.d.f. $f_Y$. Then,

$$P[X \in A] = \int_A f_X(x)\,dx,$$

$$P[Y \in B] = \int_B f_Y(y)\,dy$$

So $X$ and $Y$ are independent if:
$$\begin{aligned} P[X \in A, Y \in B] &= \int_A f_X(x)\,dx \int_B f_Y(y)\,dy \\ &= \int_A f_X(x) \left( \int_B f_Y(y)\,dy \right) dx \\ &= \int_A \int_B f_X(x) f_Y(y)\,dy\,dx \end{aligned}$$

But if $f$ is the joint density of $X$ and $Y$ then we know that
$$P[X \in A, Y \in B] = \int_A \int_B f(x,y)\,dy\,dx$$
Hence $X$ and $Y$ are independent if and only if we can write the joint density in the form

$$f(x,y) = f_X(x)\,f_Y(y)$$
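The factorization can also be checked numerically. The sketch below (an added illustration, using two independent standard normal variables as an arbitrary choice) estimates both sides of the defining identity by simulation:

```python
import numpy as np

# Monte Carlo check: for independent X and Y, the joint probability of
# {X in A} and {Y in B} should be close to the product of the marginals.
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = rng.standard_normal(1_000_000)

in_A = x > 1.0          # A = (1, infinity)
in_B = np.abs(y) < 0.5  # B = (-0.5, 0.5)

print(np.mean(in_A & in_B))            # estimate of P[X in A, Y in B]
print(np.mean(in_A) * np.mean(in_B))   # estimate of P[X in A] P[Y in B]
```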
Independence and Expected Values

If $X$ and $Y$ are independent random variables then $E[XY] = E[X]\,E[Y]$.
Further, if $X$ and $Y$ are independent random variables then $E[g(X)h(Y)] = E[g(X)]\,E[h(Y)]$ holds whenever $g$ and $h$ are functions for which these expectations exist.
Details

If $X$ and $Y$ are random variables with joint density $f(x,y)$, then for $h:\mathbb{R}^2 \to \mathbb{R}$ we have
$$E[h(X,Y)] = \int \int h(x,y)\,f(x,y)\,dx\,dy$$

for those $h$ such that the integral on the right exists.
Suppose $X$ and $Y$ are independent continuous random variables, then

$$f(x,y) = f_X(x)\,f_Y(y)$$

Thus,
$$\begin{aligned} E[XY] &= \int \int xy\, f(x,y)\,dx\,dy \\ &= \int \int xy\, f_X(x) f_Y(y)\,dx\,dy \\ &= \int x f_X(x)\,dx \int y f_Y(y)\,dy \\ &= E[X]\,E[Y] \end{aligned}$$

Note that if $X$ and $Y$ are independent then $E[h(X)g(Y)] = E[h(X)]\,E[g(Y)]$ is true whenever the functions $h$ and $g$ have expected values.
Examples

Suppose $X, Y \sim U(0,2)$ are independent and identically distributed. Then,
$$f_X(x) = \begin{cases} \frac{1}{2} & \text{if } 0 \leq x \leq 2 \\ 0 & \text{otherwise} \end{cases}$$

and similarly for $f_Y$. Next, note that,

$$f(x,y) = f_X(x)\,f_Y(y) = \begin{cases} \frac{1}{4} & \text{if } 0 \leq x, y \leq 2 \\ 0 & \text{otherwise} \end{cases}$$

Also note that $f(x,y) \geq 0$ for all $(x,y) \in \mathbb{R}^2$ and

$$\int \int f(x,y)\,dx\,dy = \int_{0}^{2} \int_{0}^{2} \frac{1}{4}\,dx\,dy = \frac{1}{4} \cdot 4 = 1$$
It follows that
$$\begin{aligned} E[XY] &= \int \int f(x,y)\,xy\,dx\,dy \\ &= \int_{0}^{2} \int_{0}^{2} \frac{1}{4} xy\,dx\,dy \\ &= \frac{1}{4} \int_{0}^{2} y \left( \int_{0}^{2} x\,dx \right) dy \\ &= \frac{1}{4} \int_{0}^{2} y \left. \frac{x^{2}}{2} \right|_{0}^{2} dy \\ &= \frac{1}{4} \int_{0}^{2} y \left( \frac{2^{2}}{2} - \frac{0^{2}}{2} \right) dy \\ &= \frac{1}{4} \int_{0}^{2} 2y\,dy \\ &= \frac{1}{2} \int_{0}^{2} y\,dy \\ &= \frac{1}{2} \left. \frac{y^{2}}{2} \right|_{0}^{2} \\ &= \frac{1}{2} \left( \frac{2^{2}}{2} - \frac{0^{2}}{2} \right) \\ &= \frac{1}{2} \cdot 2 \\ &= 1 \end{aligned}$$

but
$$E[X] = E[Y] = \int_{0}^{2} x \cdot \frac{1}{2}\,dx = 1$$
so
$$E[XY] = E[X]\,E[Y]$$
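A short simulation (added here as a sanity check on the example, not part of the original derivation) reproduces these values numerically:

```python
import numpy as np

# Simulate the example: X and Y independent and uniform on (0, 2).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0, size=1_000_000)
y = rng.uniform(0.0, 2.0, size=1_000_000)

print(np.mean(x * y))            # close to E[XY] = 1
print(np.mean(x) * np.mean(y))   # close to E[X] E[Y] = 1
```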
Independence and the Covariance

If $X$ and $Y$ are independent then $\operatorname{Cov}(X,Y) = 0$, since $\operatorname{Cov}(X,Y) = E[XY] - E[X]E[Y]$ and independence gives $E[XY] = E[X]E[Y]$.
In fact, if $X$ and $Y$ are independent then $\operatorname{Cov}(h(X), g(Y)) = 0$ for any functions $g$ and $h$ for which the expected values exist.
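The following sketch (an added empirical illustration, with $h(x) = x^2$ and $g(y) = e^y$ chosen arbitrarily) shows the sample covariances of independent samples hovering near zero:

```python
import numpy as np

# Empirical check: for independent samples, the sample covariance is
# close to zero, and the same holds for functions of the two variables.
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = rng.uniform(0.0, 1.0, size=1_000_000)

print(np.cov(x, y)[0, 1])               # approximately Cov(X, Y) = 0
print(np.cov(x**2, np.exp(y))[0, 1])    # approximately Cov(h(X), g(Y)) = 0
```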
The Moment Generating Function

If $X$ is a random variable, we define its moment generating function as $M(t) := E[e^{tX}]$ for those $t$ at which this expected value exists.
Examples

If $X \sim \mathrm{Bin}(n,p)$ then

$$M(t) = \sum_{x=0}^{n} e^{tx} p(x) = \sum_{x=0}^{n} e^{tx} \binom{n}{x} p^{x} (1-p)^{n-x} = \left(p e^{t} + 1 - p\right)^{n},$$

where the last equality follows from the binomial theorem.
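The sketch below (an added numerical check, using SciPy's binomial p.m.f.) evaluates the defining sum directly and compares it with the closed form:

```python
import numpy as np
from scipy.stats import binom

# Compare the series definition of the binomial m.g.f. with the closed
# form (p e^t + 1 - p)^n for one (arbitrary) choice of n, p and t.
n, p, t = 10, 0.3, 0.7
x = np.arange(n + 1)

series = np.sum(np.exp(t * x) * binom.pmf(x, n, p))
closed_form = (p * np.exp(t) + 1 - p) ** n

print(series, closed_form)   # the two values agree
```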
Moments and the Moment Generating Function

If $M_X(t)$ is the moment generating function (m.g.f.) of $X$, then $M_X^{(n)}(0) = E[X^n]$.
Details

Observe that

$$M(t) = E[e^{tX}] = E\left[1 + tX + \frac{(tX)^2}{2!} + \frac{(tX)^3}{3!} + \dots\right]$$

since $e^a = 1 + a + \frac{a^2}{2!} + \frac{a^3}{3!} + \dots$.
If the random variable $e^{|tX|}$ has a finite expected value then we can interchange the sum and the expected value to obtain:
$$M(t) = E\left[\sum_{n=0}^{\infty} \frac{(tX)^n}{n!}\right] = \sum_{n=0}^{\infty} \frac{E[(tX)^n]}{n!} = \sum_{n=0}^{\infty} t^n \frac{E[X^n]}{n!}$$
This implies that the $n^{\text{th}}$ derivative of $M(t)$, evaluated at $t=0$, is exactly $E[X^n]$.
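As an added illustration (using $X \sim \text{Exponential}(1)$, whose m.g.f. is $M(t) = 1/(1-t)$ for $t < 1$), the sketch below approximates the first two derivatives at $t = 0$ by finite differences and recovers $E[X] = 1$ and $E[X^2] = 2$:

```python
# m.g.f. of X ~ Exponential(1): M(t) = 1 / (1 - t) for t < 1.
def M(t):
    return 1.0 / (1.0 - t)

h = 1e-4  # step size for central finite differences
first_derivative = (M(h) - M(-h)) / (2 * h)             # ~ M'(0)
second_derivative = (M(h) - 2 * M(0.0) + M(-h)) / h**2  # ~ M''(0)

print(first_derivative)    # approximately 1 = E[X]
print(second_derivative)   # approximately 2 = E[X^2]
```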
The Moment Generating Function of a Sum of Random Variables

If $X$ and $Y$ are independent, then $M_{X+Y}(t) = M_X(t) \cdot M_Y(t)$.
Details

Let $X$ and $Y$ be independent random variables, then
$$M_{X+Y}(t) = E[e^{Xt+Yt}] = E[e^{Xt} e^{Yt}] = E[e^{Xt}]\,E[e^{Yt}] = M_X(t)\,M_Y(t)$$
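An empirical check (added here, with a standard normal and a uniform variable chosen arbitrarily) compares sample estimates of the two sides at a single value of $t$:

```python
import numpy as np

# For independent X and Y, the estimate of E[e^{t(X+Y)}] should be close
# to the product of the estimates of E[e^{tX}] and E[e^{tY}].
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = rng.uniform(-1.0, 1.0, size=1_000_000)
t = 0.5

print(np.mean(np.exp(t * (x + y))))                      # estimate of M_{X+Y}(t)
print(np.mean(np.exp(t * x)) * np.mean(np.exp(t * y)))   # estimate of M_X(t) M_Y(t)
```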
Uniqueness of the Moment Generating Function

Moment generating functions (m.g.f.s) uniquely determine the probability distribution of a random variable. Thus, if two random variables have the same moment generating function, then they must also have the same distribution.